7. Practical Application (Practice)

7.1 How we apply Lean thinking in this practice

We take the Lean ideas from the outline (waste, flow, pull, and an improvement mindset) and apply them to one realistic operations scenario. We read the charts to define the problem, connect the signals together, and only then propose solutions. In this dataset we focus on flow stability, constraint behavior, quality loops, and inventory timing, because these are the common drivers that make an operation look busy while customers still wait.

7.1.1 Story map

7.1.2 Load data

7.1.3 What we want to do here

We want to apply everything we have learned to a realistic setting, not only in theory, the way we would in a factory. We start from demand pressure, follow the flow inside the value stream, check waiting and bottlenecks, confirm hidden rework loops, and then look at WIP and inventory pain. After we define the real problem, we apply Lean solutions and use LP/network flow to plan in a more stable way.

7.1.4 Things to keep in mind

Even though this case is based on synthetically generated data, not real company data, our goal is to practice the logic: how we read signals, define the problem, and decide which actions to take and which models to use.

7.2 Demand shape with capacity context

The demand line shows variability and some spikes. Demand itself is not waste, but it becomes a risk when the system cannot absorb it. In the dataset, average weekly demand is about 2,811 units, and the peak week reaches around 4,836 units. Our capacity proxy is around 7,001 units per week, so the real issue is not total capacity on paper. The issue is how the flow absorbs variability, how stable the release is, and whether the mix is correct.
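The peak-versus-capacity comparison can be sketched in a few lines. The weekly demand series below is hypothetical (only the 4,836-unit peak and the 7,001-unit capacity proxy come from the text); the point is that a spike week consumes a large share of capacity even when the average looks comfortable.

```python
# Sketch: summarising weekly demand against a capacity proxy.
# The demand values are hypothetical stand-ins for the dataset series.
weekly_demand = [2500, 3100, 2200, 4836, 2700, 3050, 2400]

avg_demand = sum(weekly_demand) / len(weekly_demand)
peak_demand = max(weekly_demand)
capacity = 7001  # weekly capacity proxy from the report

# Peak load ratio: how much of one week's capacity a spike consumes.
peak_load = peak_demand / capacity
print(round(avg_demand), peak_demand, round(peak_load, 2))
```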

7.3 Throughput vs demand gap

We compare weekly finished throughput (PACK output) against weekly demand due. On average we are not always short, but the gap swings a lot. In the worst shortage week, the gap drops to about -3,387 units, and we see 13 weeks with a negative gap. This kind of swing is a common driver of expediting and late shipments, and it also pushes planning toward batching behavior.
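The gap metric itself is simple to compute. A minimal sketch, with hypothetical weekly values (only the -3,387 worst-week figure is engineered to match the text):

```python
# Sketch: weekly gap between finished throughput (PACK output)
# and demand due. Values are hypothetical.
throughput = [3000, 2600, 3100, 1400, 2900]
demand_due = [2800, 3000, 2700, 4787, 2500]

gap = [t - d for t, d in zip(throughput, demand_due)]
worst_gap = min(gap)                            # most severe shortage week
shortage_weeks = sum(1 for g in gap if g < 0)   # weeks with negative gap
print(gap, worst_gap, shortage_weeks)
```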

7.4 Value stream breakdown (cycle vs waiting)

This chart separates processing time and waiting time by step. The first thing we watch is waiting, because waiting is pure waste: time where the product is not getting value. In the data, the step with the highest average waiting is QC_INLINE, with average waiting around 61.8 minutes and p95 waiting around 145.7 minutes. The p95 matters because it shows the ugly tail (the mean shows normal behavior; the p95 captures the tail risk of waiting time), when flow is really unstable and batches arrive together.
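The mean-versus-p95 comparison can be sketched per step. The waiting samples below are hypothetical, and the nearest-rank percentile is an assumption (the report does not state which percentile method its charts use):

```python
# Sketch: mean vs p95 waiting per step, the two statistics used in 7.4.
# Waiting samples (minutes) are hypothetical.
waits = {
    "QC_INLINE": [30, 45, 50, 60, 70, 80, 90, 110, 130, 150],
    "SEW":       [10, 12, 15, 15, 18, 20, 22, 25, 28, 30],
}

def p95(xs):
    # Nearest-rank percentile: no interpolation, fine for small samples.
    xs = sorted(xs)
    k = max(0, int(round(0.95 * len(xs))) - 1)
    return xs[k]

for step, xs in waits.items():
    print(step, "mean:", sum(xs) / len(xs), "p95:", p95(xs))
```

The gap between the two numbers per step is the tail-risk signal: a step can look fine on average while its p95 reveals the waves.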

When we see waiting downstream, we usually do not blame that station immediately. Downstream waiting often means upstream release is unstable: rework loops or batching make work arrive in waves. So the value stream chart is not only a local efficiency chart, it is a flow chart.

7.5 Bottleneck confirmation (utilization heatmap)

The heatmap shows which lines are close to full utilization. A line with high utilization for many weeks is a real constraint signal. When the constraint has low slack, small variability turns into queues, and queues turn into waiting and WIP. That is why we can see waiting and late delivery even when throughput is not always low. It is constraint variability, not just average speed.
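The heatmap cells reduce to one ratio per line per week. A minimal sketch, where the hours, the 40-hour availability, and the 0.90 "close to full" cutoff are all assumptions:

```python
# Sketch: utilization per line per week, flagging likely constraints.
# Hours used, hours available, and the threshold are all hypothetical.
hours_used = {
    "LINE_A": [38, 39, 40, 37],
    "LINE_B": [22, 25, 20, 24],
}
hours_available = 40   # per line per week (assumed)
THRESHOLD = 0.90       # assumed cutoff for "close to full utilization"

hot_weeks = {}
for line, used in hours_used.items():
    util = [u / hours_available for u in used]
    hot_weeks[line] = sum(1 for u in util if u >= THRESHOLD)
    print(line, [round(u, 2) for u in util], "weeks at/above cutoff:", hot_weeks[line])
```

A line like LINE_A, sitting at or above the cutoff every week, is the constraint candidate: it has no slack to absorb variability.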

7.6 Quality signal and Pareto defects (quality loss = capacity loss)

Quality loss is not only a quality topic, it is a capacity topic. In the data, the average reject rate is about 2.8% and the average rework rate is about 0.7%. In peak weeks, rejects can reach about 3.3% and rework about 1.1%. These losses reduce effective output, and they also disturb flow because rework creates loops.

The Pareto chart helps us see whether defects are systematic. In our data, the top defect is THREAD_BREAK, accounting for around 16.3% of all defects. If we fix the top 5 defects, we can potentially recover around 72% of the defect quantity. This is why Pareto is like an ROI curve for improvement: we fix the big drivers first, not everything at once.
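The Pareto calculation is just a ranked cumulative share. The defect names and counts below are hypothetical (chosen so the top-1 and top-5 shares land near the reported 16.3% and 72%), but the logic matches the chart:

```python
# Sketch: Pareto of defect counts and the cumulative share of the top-k.
# Defect categories and counts are hypothetical.
defects = {
    "THREAD_BREAK": 163, "SKIP_STITCH": 150, "STAIN": 140,
    "MISALIGN": 135, "HOLE": 132, "OTHER_1": 90, "OTHER_2": 100,
    "OTHER_3": 90,
}

total = sum(defects.values())
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)
top5_share = sum(n for _, n in ranked[:5]) / total
print(ranked[0][0], round(ranked[0][1] / total, 3), round(top5_share, 2))
```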

7.7 WIP build-up (congestion inside the system)

WIP is work trapped in the system. Rising WIP does not mean rising throughput; often it means congestion. In the data, average WIP is about 11.7 items in progress at mid-day, and the peak reaches 21. We use the WIP-by-step view to see where the congestion sits. If WIP concentrates around certain steps, that is where flow is being blocked, usually near the constraint or near a rework loop.
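The mid-day WIP count is a snapshot query: an item is in progress if it started before the snapshot and has not finished. A minimal sketch with hypothetical records and step names:

```python
# Sketch: WIP snapshot at a fixed time, counted per step.
# Records (step, start, end) are hypothetical; end None = still running.
from datetime import datetime

snapshot = datetime(2025, 3, 3, 12, 0)  # mid-day snapshot
records = [
    ("CUT",  datetime(2025, 3, 3, 9, 0),   datetime(2025, 3, 3, 11, 0)),
    ("SEW",  datetime(2025, 3, 3, 10, 0),  None),
    ("SEW",  datetime(2025, 3, 3, 11, 30), datetime(2025, 3, 3, 14, 0)),
    ("PACK", datetime(2025, 3, 3, 13, 0),  datetime(2025, 3, 3, 15, 0)),
]

wip_by_step = {}
for step, start, end in records:
    # In progress at the snapshot: started before it, not yet finished.
    if start <= snapshot and (end is None or end > snapshot):
        wip_by_step[step] = wip_by_step.get(step, 0) + 1
print(wip_by_step, "total WIP:", sum(wip_by_step.values()))
```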

7.8 Inventory imbalance (on-hand and backorder)

The inventory snapshot shows the pain in a very direct way. In the data, weekly backorder can reach about 12,723 units, and weekly on-hand can reach about 6,810 units. The important Lean pattern is when both on-hand and backorder are high in the same period. We observe this in 2 weeks where both sit in the top quartile, so the system can build stock and still fail the customer. This usually means poor synchronization: wrong timing, wrong mix, or wrong location allocation.
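The "both high in the same week" check can be coded as a joint top-quartile flag. The weekly series and the nearest-rank quartile cutoff below are assumptions, not the dataset's actual values:

```python
# Sketch: flag weeks where on-hand and backorder are BOTH high
# (both in their top quartile) — the imbalance pattern from 7.8.
# Weekly values are hypothetical.
on_hand   = [1000, 6810, 2000, 5500, 1500, 6000, 900, 5800]
backorder = [200, 9000, 100, 12723, 300, 400, 150, 11000]

def q75(xs):
    # Simple top-quartile cutoff via nearest rank on sorted values.
    xs = sorted(xs)
    return xs[int(0.75 * (len(xs) - 1))]

oh_cut, bo_cut = q75(on_hand), q75(backorder)
both_high = [i for i in range(len(on_hand))
             if on_hand[i] >= oh_cut and backorder[i] >= bo_cut]
print("imbalanced weeks:", both_high)
```

Weeks flagged here hold plenty of stock while still owing customers, which is the synchronization failure rather than a pure capacity failure.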

7.9 Delivery lateness (customer impact)

Delivery lateness is the final output symptom. In the data, about 20% of shipments are late. Among late shipments, p95 lateness is around 4.2 days. This is consistent with a system that has unstable flow and rework loops: even if total production is not terrible, timing is unstable.
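Both numbers come from the same shipment-level series: the late rate over all shipments, and the tail computed over late shipments only. A sketch with hypothetical lateness values (days; zero or negative means on time):

```python
# Sketch: late rate and p95 lateness among late shipments only.
# late_days values are hypothetical; d <= 0 means on time or early.
late_days = [0, -1, 2, 0, 5, 0, 0, 1, 0, 0,
             0, 0, 0, 4, 0, 0, 0, 0, 0, 0]

late = [d for d in late_days if d > 0]
late_rate = len(late) / len(late_days)
# Nearest-rank p95 over the late subset only.
p95_late = sorted(late)[max(0, int(round(0.95 * len(late))) - 1)]
print(round(late_rate, 2), p95_late)
```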

7.10 From diagnosis to solution (Lean first, then Optimization)

At this point we have defined the problem with numbers: demand variability is present, the throughput gap swings, waiting and WIP show instability, quality loss steals capacity, inventory shows imbalance, and shipments show customer pain. So the solution focuses on flow control and removing systematic causes, not just pushing people to work faster.

We start with Lean actions around the constraint: controlling release (pull discipline, WIP limits), reducing batching, and making the constraint more stable (setup reduction, standard work). We also focus on quality at the source using the defect Pareto, because less rework means more stable flow. After diagnosing the waste and flow problems, we use a weekly LP for production planning. The LP does not replace Lean, it supports it: it gives a feasible plan under capacity constraints, and it helps reduce the timing mismatch between what we produce and what customers need.

7.10.1 Optimization 1 — Production planning LP
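As a minimal sketch of the planning logic: when there is a single shared capacity constraint, the LP optimum for the product-mix problem is reached by ranking products by contribution margin per constraint hour and filling capacity in that order (the classic one-resource special case; a multi-constraint weekly LP would need an actual solver such as `scipy.optimize.linprog` or PuLP). All product data below are hypothetical.

```python
# Sketch: one-constraint product-mix "LP", solved by ratio ranking.
# (name, margin per unit, machine hours per unit, demand cap) — all hypothetical.
products = [
    ("SKU_A", 12.0, 0.10, 2000),
    ("SKU_B",  9.0, 0.05, 3000),
    ("SKU_C", 15.0, 0.20, 1000),
]
capacity_hours = 300.0  # the constraint's weekly availability (assumed)

plan, hours_left = {}, capacity_hours
# Fill products in order of margin per constraint hour (TOC logic:
# exploit the constraint) until capacity runs out.
for name, margin, hrs, cap in sorted(products,
                                     key=lambda p: p[1] / p[2], reverse=True):
    qty = min(cap, hours_left / hrs)
    plan[name] = round(qty, 1)
    hours_left -= qty * hrs
print(plan, "hours left:", round(hours_left, 1))
```

Note that SKU_C has the highest margin per unit but the lowest margin per constraint hour, so it gets nothing; this is exactly the counterintuitive mix decision the LP makes visible.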

7.10.2 Optimization 2 — Network flow (WH → DC)

In Section 7.3 we already see weeks with a big positive production gap, so it looks like we have enough output. But in reality, service can still fail at the DC level. The extra units can be the wrong SKU mix, or they can sit in the wrong place (stock stays in the WH or in another DC). Even when the stock is correct, lane capacity can limit how fast we move units from WH to DC, so the demand is "there" but shipments cannot catch up. And demand is needed now, week by week, while production is often released in batches, so timing can be wrong even when total volume is high. That is why we use network flow here: to make the allocation and capacity limits visible, not only the total output.
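The lane-capacity effect can be shown with a tiny max-flow sketch (Edmonds–Karp) on a hypothetical WH→DC network: total supply equals total demand, yet the WH1→DC1 lane leaves part of DC1's demand unserved. The network, capacities, and node names are all assumptions for illustration.

```python
# Sketch: max flow from a warehouse to DCs with lane capacity limits.
# Supply covers total demand, but a lane cap still strands demand.
from collections import deque, defaultdict

def max_flow(cap, s, t):
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Walk back to find the bottleneck, then push that amount.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] = cap[v].get(u, 0) + push
        flow += push

cap = defaultdict(dict)
cap["S"]["WH1"] = 500    # supply available at WH1
cap["WH1"]["DC1"] = 200  # lane capacity WH1 -> DC1 (the limiter)
cap["WH1"]["DC2"] = 400  # lane capacity WH1 -> DC2
cap["DC1"]["T"] = 350    # demand at DC1
cap["DC2"]["T"] = 150    # demand at DC2

served = max_flow(cap, "S", "T")
print("demand served:", served)
```

Here 500 units exist and 500 are demanded, but only 350 can be served: DC1 needs 350 while its inbound lane carries at most 200. That is the "output exists but service fails" pattern made explicit.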

7.11 Validation (before vs after impact)

Finally we compare BEFORE vs AFTER using the same measurement logic, so each chart tells us what we actually fixed. In the AFTER scenario, we expect waiting and WIP to be lower and more stable, reject and rework rates to drop, inventory to be less imbalanced, and shipment lateness to improve. The charts below show the direction of change and make the story easy to defend in a discussion.

7.12 Data trust checks (sanity + relationship)

This is a quick trust check, not a claim of "perfect truth". We want to show that the synthetic data behaves like a real operating system, meaning the key wastes are connected in a reasonable way. We build a weekly waste table from the raw records, then check the correlation matrix between the main signals. If WIP tends to move with waiting, backorder tends to move with late delivery, and quality loss sometimes moves with waiting during stressed weeks, then the data is coherent enough for Lean analysis and optimization.

## # A tibble: 31 × 15
##    week       demand_units produced_units rejected_units rework_units
##    <date>            <dbl>          <dbl>          <dbl>        <dbl>
##  1 2024-12-30           NA           3061             83           20
##  2 2025-01-06          241          11278            286           79
##  3 2025-01-13          852          20941            581          152
##  4 2025-01-20         2744          14809            411          115
##  5 2025-01-27         1905          15058            405           98
##  6 2025-02-03         1945          21147            582          144
##  7 2025-02-10         1883          23802            687          181
##  8 2025-02-17         4298          18544            511          128
##  9 2025-02-24         3601          15675            426          119
## 10 2025-03-03         3269          17813            516          124
## # ℹ 21 more rows
## # ℹ 10 more variables: machine_hours <dbl>, avg_wait_min <dbl>,
## #   p95_wait_min <dbl>, avg_wip <dbl>, on_hand_units <dbl>,
## #   backorder_units <dbl>, late_rate <dbl>, reject_rate <dbl>,
## #   rework_rate <dbl>, service_gap <dbl>
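The relationship check behind the correlation matrix can be sketched with a plain Pearson correlation. The weekly series below are hypothetical stand-ins for columns like `avg_wip`, `avg_wait_min`, `backorder_units`, and `late_rate` from the table above:

```python
# Sketch: pairwise Pearson correlation between weekly waste signals.
# Series are hypothetical; the check is whether related wastes move together.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

avg_wip      = [10, 12, 15, 11, 18, 20, 14, 9]
avg_wait_min = [40, 48, 60, 45, 70, 82, 55, 38]
backorder    = [100, 50, 400, 80, 900, 1200, 300, 60]
late_rate    = [0.10, 0.08, 0.25, 0.09, 0.35, 0.40, 0.20, 0.07]

print("wip ~ wait:", round(pearson(avg_wip, avg_wait_min), 2))
print("backorder ~ late:", round(pearson(backorder, late_rate), 2))
```

Strong positive values on these two pairs are the coherence signal described above; near-zero or negative values would suggest the synthetic generator is not behaving like a real flow system.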

We also keep MECE thinking in the storyline: demand vs capacity is the trigger; the value stream and WIP views cover waiting and inventory inside the flow; the quality charts cover defects and rework; utilization covers the bottleneck evidence; inventory and delivery cover customer pain. This keeps the report structured and non-repetitive.